497 research outputs found
Non-distributional Word Vector Representations
Data-driven representation learning for words is a technique of central
importance in NLP. While indisputably useful as a source of features in
downstream tasks, such vectors tend to consist of uninterpretable components
whose relationship to the categories of traditional lexical semantic theories
is tenuous at best. We present a method for constructing interpretable word
vectors from hand-crafted linguistic resources such as WordNet and FrameNet.
These vectors are binary (i.e., contain only 0s and 1s) and are 99.9% sparse. We
analyze their performance on state-of-the-art evaluation methods for
distributional models of word vectors and find that they are competitive with
standard distributional approaches.
Comment: Proceedings of ACL 201
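The core idea of the abstract above can be sketched with a toy lexicon: one vector dimension per hand-crafted linguistic feature, with a 1 wherever the word carries that feature. This is a minimal illustration, not the paper's code; the lexicon entries and feature names below are made up.

```python
# Toy stand-in for WordNet/FrameNet lookups: word -> set of linguistic features.
# Entries are illustrative only.
lexicon = {
    "dog": {"pos:noun", "wn:animal", "wn:mammal"},
    "cat": {"pos:noun", "wn:animal", "wn:mammal", "wn:feline"},
    "run": {"pos:verb", "fn:Self_motion"},
}

# One dimension per distinct feature, in a fixed order.
features = sorted(set().union(*lexicon.values()))

def binary_vector(word):
    """Return a 0/1 vector: dimension i is 1 iff the word has features[i]."""
    active = lexicon.get(word, set())
    return [1 if f in active else 0 for f in features]

vec = binary_vector("cat")
```

With a realistic resource the feature inventory runs into the tens of thousands, which is why the resulting vectors are overwhelmingly sparse.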
Correlation-based Intrinsic Evaluation of Word Vector Representations
We introduce QVEC-CCA, an intrinsic evaluation metric for word vector
representations based on correlations of learned vectors with features
extracted from linguistic resources. We show that QVEC-CCA scores are an
effective proxy for a range of extrinsic semantic and syntactic tasks. We also
show that the proposed evaluation obtains higher and more consistent
correlations with downstream tasks, compared to existing approaches to
intrinsic evaluation of word vectors that are based on word similarity.
Comment: RepEval 2016, 5 pages
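The CCA step underlying a metric like the one above can be sketched with plain NumPy: given learned vectors X and linguistic-feature vectors Y for the same words, compute the canonical correlations between the two spaces. This is a generic SVD-based CCA, not the paper's implementation; the ridge term `reg` and the synthetic data are assumptions for the sketch.

```python
import numpy as np

def cca_correlations(X, Y, reg=1e-8):
    """Canonical correlations between the column spaces of X and Y.

    Minimal SVD-based CCA: whiten each side, then take singular values
    of the cross-covariance. A small ridge keeps the inverses stable.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)  # descending correlations

# Synthetic check: Y is (almost) a linear map of X, so correlations are near 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = X @ rng.normal(size=(5, 3)) + 0.1 * rng.normal(size=(200, 3))
corrs = cca_correlations(X, Y)
```

A QVEC-CCA-style score would then summarize these correlations (e.g., the top canonical correlation) as the quality measure for the learned vectors.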
Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling
Fixed-vocabulary language models fail to account for one of the most
characteristic statistical facts of natural language: the frequent creation and
reuse of new word types. Although character-level language models offer a
partial solution in that they can create word types not attested in the
training corpus, they do not capture the "bursty" distribution of such words.
In this paper, we augment a hierarchical LSTM language model that generates
sequences of word tokens character by character with a caching mechanism that
learns to reuse previously generated words. To validate our model we construct
a new open-vocabulary language modeling corpus (the Multilingual Wikipedia
Corpus, MWC) from comparable Wikipedia articles in 7 typologically diverse
languages and demonstrate the effectiveness of our model across this range of
languages.
Comment: ACL 201
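The caching idea in the abstract above, reusing recently generated words to capture their bursty distribution, can be sketched with a unigram toy model: interpolate a base distribution (standing in for the character-level generator) with a bounded cache of recent words. The class, counts, and interpolation weight below are illustrative assumptions, not the paper's model.

```python
from collections import Counter, deque

class CachedUnigramLM:
    """Base unigram model interpolated with a recency cache of words."""

    def __init__(self, base_counts, cache_size=50, lam=0.3):
        self.base = Counter(base_counts)
        self.base_total = sum(self.base.values())
        self.cache = deque(maxlen=cache_size)  # recently generated words
        self.lam = lam  # interpolation weight on the cache

    def observe(self, word):
        """Record a generated word so it can be cheaply reused."""
        self.cache.append(word)

    def prob(self, word):
        p_base = self.base[word] / self.base_total if self.base_total else 0.0
        p_cache = self.cache.count(word) / len(self.cache) if self.cache else 0.0
        return (1 - self.lam) * p_base + self.lam * p_cache

# A novel word has zero base probability, but once generated it becomes
# likely to recur -- the "bursty" behavior the cache is meant to capture.
lm = CachedUnigramLM({"the": 50, "cat": 5})
p_before = lm.prob("einstein")
lm.observe("einstein")
p_after = lm.prob("einstein")
```

In the paper's full model the base distribution is a character-level LSTM that can create unseen word types, and the mixing weight is learned rather than fixed.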
Conditional Random Field Autoencoders for Unsupervised Structured Prediction
We introduce a framework for unsupervised learning of structured predictors
with overlapping, global features. Each input's latent representation is
predicted conditional on the observable data using a feature-rich conditional
random field. Then a reconstruction of the input is (re)generated, conditional
on the latent structure, using models for which maximum likelihood estimation
has a closed-form. Our autoencoder formulation enables efficient learning
without making unrealistic independence assumptions or restricting the kinds of
features that can be used. We illustrate insightful connections to traditional
autoencoders, posterior regularization and multi-view learning. We show
competitive results with instantiations of the model for two canonical NLP
tasks: part-of-speech induction and bitext word alignment, and show that
training our model can be substantially more efficient than comparable
feature-rich baselines.
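The encode-then-reconstruct objective described above can be illustrated for a single token (dropping the structured CRF over sequences): a feature-rich log-linear encoder scores each latent label, and a per-label multinomial regenerates the observation; the objective is the log of the marginal reconstruction probability. The labels, feature names, and parameter values below are made up for illustration.

```python
import math

labels = ["NOUN", "VERB"]

# Encoder weights over (feature, label) pairs -- the "feature-rich" part.
w = {("suffix:-ing", "VERB"): 2.0, ("capitalized", "NOUN"): 1.5}

# Reconstruction model: p(word | label), a closed-form multinomial per label.
theta = {"NOUN": {"running": 0.1, "Paris": 0.9},
         "VERB": {"running": 0.8, "Paris": 0.2}}

def log_marginal(word, feats):
    """log sum_z p(z | feats) * p(word | z): the autoencoder objective."""
    scores = [sum(w.get((f, z), 0.0) for f in feats) for z in labels]
    logZ = math.log(sum(math.exp(s) for s in scores))
    return math.log(sum(math.exp(s - logZ) * theta[z][word]
                        for s, z in zip(scores, labels)))

# The -ing suffix pushes the encoder toward VERB, under which "running"
# is easy to reconstruct, so this objective value is relatively high.
ll = log_marginal("running", ["suffix:-ing"])
```

In the full model the encoder is a CRF over entire label sequences with overlapping global features, and maximum likelihood estimates of the reconstruction multinomials remain closed-form given the posterior over latent structures.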